Search results for "Eigendecomposition of a matrix"

Showing 4 of 4 documents

A variational method for spectral functions

2016

The Generalized Eigenvalue Problem (GEVP) has been used extensively to reliably extract energy levels from time-dependent Euclidean correlators calculated in Lattice QCD. We propose a formulation of the GEVP in frequency space. Our approach applies the model-independent Backus-Gilbert method to a set of Euclidean two-point functions with common quantum numbers. A GEVP analysis in frequency space is then applied to a matrix of estimators, allowing us, among other things, to obtain particular linear combinations of the initial set of operators that optimally overlap with different local regions in frequency. We apply this method to lattice data from NRQCD. Th…
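The GEVP construction can be illustrated on a synthetic correlator matrix (a toy model, not the paper's NRQCD data or its Backus-Gilbert frequency-space variant). With as many operators as states, solving C(t) v = λ C(t₀) v yields λ_n = exp(−E_n (t − t₀)), so the energies are recovered exactly; the overlap matrix `Z` and energies `E` below are made-up values:

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-state, 3-operator model: C(t)_ij = sum_n Z[i,n] Z[j,n] exp(-E_n t)
E = np.array([0.5, 1.0, 1.5])        # illustrative energy levels
rng = np.random.default_rng(0)
Z = rng.normal(size=(3, 3))          # overlap factors, full rank

def corr(t):
    # symmetric positive-definite correlator matrix at Euclidean time t
    return (Z * np.exp(-E * t)) @ Z.T

t0, t = 2.0, 5.0
# Generalized eigenvalue problem: C(t) v = lam C(t0) v
lam, vecs = eigh(corr(t), corr(t0))

# Effective energies from lam_n = exp(-E_n (t - t0))
E_eff = np.sort(-np.log(lam) / (t - t0))
print(E_eff)   # recovers [0.5, 1.0, 1.5] up to rounding
```

The eigenvectors `vecs` give the optimal linear combinations of the original operators, which is the role the frequency-space GEVP plays for the matrix of Backus-Gilbert estimators in the abstract.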

High Energy Physics - Lattice; Variational method; Lattice (order); Quantum mechanics; High Energy Physics - Lattice (hep-lat); Euclidean geometry; Lattice field theory; FOS: Physical sciences; Estimator; Applied mathematics; Lattice QCD; Linear combination; Eigendecomposition of a matrix; Proceedings of 34th annual International Symposium on Lattice Field Theory — PoS(LATTICE2016)

What is the Best Method of Matrix Adjustment? A Formal Answer by a Return to the World of Vectors

2003

Matrix adjustment methods seek the matrix closest to an initial matrix while respecting the row and column sum totals of a second matrix. To help decide which matrix-adjustment method is better, the article returns to the simpler problem of vector adjustment and then comes back to matrices. Information-loss minimization (biproportional methods and RAS) leads to a multiplicative form and generalizes the linear model. Distance minimization, on the other hand, leads to an additive form and tends to distort the data, giving a result that is asymptotically independent of the initial matrix. The result allows concluding non-ambiguou…
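The multiplicative (biproportional/RAS) adjustment the abstract favours can be sketched as iterative proportional fitting: alternately rescale rows and columns of the initial matrix until its margins match the target sums. The matrix and target totals below are made-up examples, not data from the article:

```python
import numpy as np

def ras(A, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) adjustment: scale rows and columns of A
    alternately until its margins match the target sums."""
    X = A.astype(float).copy()
    for _ in range(max_iter):
        X *= (row_targets / X.sum(axis=1))[:, None]   # row scaling
        X *= col_targets / X.sum(axis=0)              # column scaling
        if np.allclose(X.sum(axis=1), row_targets, atol=tol):
            break
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0]])       # initial matrix (illustrative)
X = ras(A, row_targets=np.array([4.0, 6.0]),
        col_targets=np.array([5.0, 5.0]))    # consistent targets: both sum to 10
print(X.sum(axis=1), X.sum(axis=0))          # margins match the targets
```

The result has the multiplicative form X = R·A·S (diagonal row and column scalings of the initial matrix), which is what distinguishes this family from the additive, distance-minimizing adjustments the article argues against.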

Matrix (mathematics); Mathematical optimization; Gaussian elimination; Matrix splitting; Convergent matrix; Block matrix; Square matrix; Augmented matrix; Eigendecomposition of a matrix; Mathematics; SSRN Electronic Journal

Online Principal Component Analysis in High Dimension: Which Algorithm to Choose?

2017

Summary: Principal component analysis (PCA) is a method of choice for dimension reduction. In the current context of data explosion, online techniques that do not require storing all data in memory are indispensable for performing PCA on streaming and/or massive data. Despite the wide availability of recursive algorithms that can efficiently update the PCA when new data are observed, the literature offers little guidance on how to select a suitable algorithm for a given application. This paper reviews the main approaches to online PCA, namely perturbation techniques, incremental methods and stochastic optimisation, and compares the most widely employed techniques in terms of statistical a…
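Of the stochastic-optimisation family the review covers, Oja's rule for the leading principal component is perhaps the simplest: each streaming observation nudges a unit vector toward the top eigenvector of the covariance, with no data stored. A sketch on a synthetic stream (the data, seed, and step-size schedule are illustrative assumptions, not the paper's benchmark):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stream: 2-D data whose leading principal axis is (1, 1)/sqrt(2)
top = np.array([1.0, 1.0]) / np.sqrt(2)
X = rng.normal(size=(20000, 2)) @ np.diag([3.0, 0.5])   # anisotropic noise
R = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)    # rotation onto `top`
X = X @ R.T

# Oja's rule: one stochastic update per observation, O(d) memory
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for i, x in enumerate(X):
    eta = 1.0 / (100 + i)          # decaying step size (illustrative schedule)
    w += eta * (x @ w) * x         # gradient step toward the top eigenvector
    w /= np.linalg.norm(w)         # renormalise to stay on the unit sphere

print(abs(w @ top))                # near 1: w aligned with the leading axis
```

Perturbation and incremental approaches trade this O(d) simplicity for faster convergence and multiple components, which is exactly the selection problem the review addresses.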

Statistics and Probability; Computer science; Computation; Dimensionality reduction; Incremental methods; 02 engineering and technology; Missing data; 01 natural sciences; 010104 statistics & probability; Data explosion; Streaming data; Principal component analysis; 0202 electrical engineering electronic engineering information engineering; 020201 artificial intelligence & image processing; 0101 mathematics; Statistics Probability and Uncertainty; Algorithm; Eigendecomposition of a matrix; International Statistical Review

Null Space Based Image Recognition Using Incremental Eigendecomposition

2011

An incremental approach to the discriminative common vector (DCV) method for image recognition is considered. Discriminative projections are tackled in the particular context in which new training data become available and learned subspaces may need continuous updating. Starting from incremental eigendecomposition of scatter matrices, an efficient updating rule based on projections and orthogonalization is given. The corresponding algorithm has been empirically assessed and compared to its batch counterpart. The good properties and performance of the original method are preserved, with a dramatic decrease in the computation required.
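The incremental-eigendecomposition starting point can be sketched with a generic rank-one subspace update (a Hall-style update of an uncentred scatter matrix, not the paper's DCV-specific rule; the data and tolerance are illustrative). Each new sample is split into its component inside the current eigenbasis and an orthogonal residual, and only a small dense problem is re-diagonalized:

```python
import numpy as np

def update_eig(U, lam, y, tol=1e-8):
    """Rank-one update: given S = U diag(lam) U.T, return the
    eigendecomposition of S + y y.T without forming S explicitly."""
    p = U.T @ y                      # component of y inside the current basis
    r = y - U @ p                    # residual orthogonal to the basis
    rho = np.linalg.norm(r)
    if rho > tol:                    # y leaves the span: grow the basis by one
        Q = np.hstack([U, (r / rho)[:, None]])
        k = len(lam)
        M = np.zeros((k + 1, k + 1))
        M[:k, :k] = np.diag(lam) + np.outer(p, p)
        M[:k, k] = rho * p
        M[k, :k] = rho * p
        M[k, k] = rho ** 2
    else:                            # y already lies in the span
        Q = U
        M = np.diag(lam) + np.outer(p, p)
    d, V = np.linalg.eigh(M)         # small dense eigenproblem
    return Q @ V, d                  # rotate basis, updated eigenvalues

# Sanity check on made-up data: accumulate samples one at a time
rng = np.random.default_rng(2)
Y = rng.normal(size=(5, 4))
U, lam = np.zeros((4, 0)), np.zeros(0)   # start from an empty decomposition
for y in Y:
    U, lam = update_eig(U, lam, y)
S = Y.T @ Y                              # batch scatter matrix
print(np.allclose(U @ np.diag(lam) @ U.T, S))
```

Truncating the basis after each update (keeping only the largest eigenvalues) is what makes such schemes cheaper than the batch eigendecomposition the paper compares against.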

Training set; Computation; Context (language use); Pattern recognition; Rule-based system; Linear subspace; Discriminative model; Computer vision; Artificial intelligence; Orthogonalization; Eigendecomposition of a matrix; Mathematics